
Conversation

@mttbernardini

I'm working on remodernising the library to:

  • Adhere to the latest packaging standards (i.e. pyproject.toml) and make the source structure more predictable (src/ tree convention) - a minimal pyproject.toml sketch follows this list
  • Ensure type annotations are up to date and coherent with runtime behaviour - in the current form, type annotations are not picked up at all, because top-level .pyi files are not supported.
  • Leverage CI to automate boring repetitive tasks, e.g. documentation generation and publishing under gh-pages, packaging and uploading sdists and wheels to PyPI (there's only an sdist on PyPI currently), etc.
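
For the pyproject.toml part, a minimal sketch of the direction (field values and paths here are illustrative, not the final metadata; setuptools' pyproject-native extension declaration is still marked experimental):

  # sketch only - values and paths are illustrative
  [build-system]
  requires = ["setuptools"]
  build-backend = "setuptools.build_meta"

  [project]
  name = "pyalsaaudio"
  version = "0.11.0"
  requires-python = ">=3.8"

  # declare the C extension and link it against libasound
  [[tool.setuptools.ext-modules]]
  name = "alsaaudio"
  sources = ["src/alsaaudio.c"]
  libraries = ["asound"]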

This will be a metadata rework only: no new features added to the library itself, and no breaking changes expected.

Opening the PR for any early feedback, but this is still WIP.

@larsimmisch
Owner

Thank you. I appreciate it. I'm hoping to pull your changes, build locally and educate myself about uv.lock this week.

@mttbernardini
Author

Thanks! As for uv, it's an arbitrary choice that's up for discussion (I usually use pdm at work, but wanted to give uv a try since it's faster and written in Rust). For this library, it's just there to "pin" the versions of the dev tools used (i.e. sphinx, pyright, mypy) in order to make CI reproducible (e.g. to avoid checks going red just because a dev tool has a new release that introduces new behaviour); the library itself has no external dependencies otherwise.

For consumers of the library, uv (or any other chosen front-end) has zero influence. The library is still built via setuptools like before.
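
To make the pinning concrete, here's a sketch of how the dev tools could be declared as a PEP 735 dependency group (the version specifiers are placeholders, not the ones actually pinned):

  # sketch only - version specifiers are placeholders
  [dependency-groups]
  dev = [
      "sphinx==8.*",
      "mypy==1.*",
      "pyright==1.1.*",
  ]

uv lock then resolves the group into uv.lock, and uv sync reproduces the exact same tool versions locally or in CI.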

@RonaldAJ
Collaborator

RonaldAJ commented Aug 9, 2025

Is my understanding correct that the mypy, pyright and sphinx dependencies are purely for testing and generating documentation and do not impact deployment in any way?

@mttbernardini
Author

Is my understanding correct that the mypy, pyright and sphinx dependencies are purely for testing and generating documentation and do not impact deployment in any way?

They're "dev" tools for linting/testing and doc generation, so they're not exposed to end users (the goal is still for the library itself to have zero runtime dependencies).

However, as for "impact deployment": if we automate the process with CI, we could ideally use the lint/test results to block deployment (or a PR merge) when CI checks are red. This could also be done in a "soft" fashion (i.e. PR checks are informative only: the PR can still be merged even if red, and deployment doesn't run any additional linting).
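
To illustrate the "hard" variant: in GitHub Actions the publish job can simply depend on the check jobs, e.g. (hypothetical excerpt; lint and test would be jobs defined elsewhere in the workflow):

  # publish is skipped automatically if any needed job fails
  jobs:
    publish:
      needs: [lint, test]
      runs-on: ubuntu-latest

The "soft" variant is the same workflow without the needs: dependency, leaving red checks visible but non-blocking.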

I haven't written any GitHub workflow to implement automatic deployment yet (which, up to now, I believe was done purely manually?).

@RonaldAJ
Collaborator

However, as for "impact deployment": if we automate the process with CI, we could ideally use the lint/test results to block deployment (or a PR merge) when CI checks are red. This could also be done in a "soft" fashion (i.e. PR checks are informative only: the PR can still be merged even if red, and deployment doesn't run any additional linting).

My concerns were purely the install time and run time.

I am not sure whether setting up CI is worth the effort. For two reasons: I have no insight into the effort required, and the amount of code change on this repository seems moderate to me. Also, I would expect CI to bring its own maintenance costs.

Modernizing the build method is something that will become unavoidable soon, so I am happy you're taking it on.

@mttbernardini
Author

mttbernardini commented Aug 10, 2025

I am not sure whether setting up CI is worth the effort.

CI should deliberately be simple: the advantage is to let contributors spend their (limited) time on actual development rather than boring devops. Moreover, providing wheels gives users true "zero runtime dependencies": not only would they not need ALSA headers or a compiler toolchain installed, they could also forgo installing libasound2, since it can be bundled with the wheel (as is common practice for manylinux wheels). This would solve issues like #96 for end users.

Building manylinux wheels by hand is definitely doable but not worth the effort, since manylinux comes with some caveats about glibc version compatibility (which is probably why I only see sdists on PyPI for pyalsaaudio). Letting CI do that is definitely easier.

I'll try to find some time this week to set up an as-simple-as-it-gets GitHub workflow; then we can review whether it's worth adopting.

@mttbernardini
Author

I forgot to mention: by CI I mostly mean documentation generation and PyPI publishing of sdists and wheels, i.e. the tasks that are boring but have a direct impact on end users (they affect how up to date the versions on PyPI and the online documentation are).

As "effort" is concerned, we could leave testing/linting out of the picture and just leave some indications on how to run them purely for the convenience of the developers. Not doing any CI around that would definitely reduce maintenance costs.

For reference, I used stubtest to check whether the .pyi stubs were consistent with the runtime definitions (although stubtest is pretty simple and doesn't test much) and pyright just for general type checking of the stubs (it caught a missing self parameter in some methods). Definitely stuff that can be done by hand at one's own convenience; no need to build anything fancy around those.
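
A sketch of how those checks can be run by hand (stubtest ships with mypy; the stub path assumes the new src/ layout):

  # compare the .pyi stub against the runtime module (needs the built extension importable)
  python -m mypy.stubtest alsaaudio
  # general type checking of the stub file itself
  pyright src/alsaaudio.pyi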

@larsimmisch
Owner

larsimmisch commented Aug 10, 2025

I forgot to mention: by CI I mostly mean documentation generation and PyPI publishing of sdists and wheels, i.e. the tasks that are boring but have a direct impact on end users (they affect how up to date the versions on PyPI and the online documentation are).

That makes sense. There are no real tests that could be run on a GitHub CI server (maybe this would even be possible, but I don't see how it could be done without libasound2 and probably a specialised /etc/asound.conf).
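
(Untested thought: a specialised config could plausibly route everything to ALSA's null plugin, which needs no hardware, e.g.:

  # hypothetical /etc/asound.conf for a sound-less CI runner
  pcm.!default {
      type null
  }
  ctl.!default {
      type null
  }

But whether that would exercise anything meaningful is another question.)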

As "effort" is concerned, we could leave testing/linting out of the picture and just leave some indications on how to run them purely for the convenience of the developers. Not doing any CI around that would definitely reduce maintenance costs.

+1

For reference, I used stubtest to check whether the .pyi stubs were consistent with the runtime definitions (although stubtest is pretty simple and doesn't test much) and pyright just for general type checking of the stubs (it caught a missing self parameter in some methods). Definitely stuff that can be done by hand at one's own convenience; no need to build anything fancy around those.

Creating a GitHub workflow would definitely be nice, but some concrete examples of generating the artifacts (sdists, wheels, documentation, etc.) would help me get a better understanding (or could go into the README.md). As it stands, I am the only one who does releases, and I'd like to understand how to do it manually, too.

@mttbernardini
Author

mttbernardini commented Sep 3, 2025

I made two GitHub workflows: one for generating and publishing the documentation, and one for building and publishing sdists and wheels to PyPI.

I guess the missing bit now is to write some minimal doc on how to build and publish to PyPI manually from a local environment, but it pretty much boils down to running cibuildwheel locally on a Linux machine with Docker installed to get the wheels, and then using either twine or uv to upload them.
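
For instance (assuming cibuildwheel's default wheelhouse/ output directory; twine would work just as well for the upload):

  # build manylinux/musllinux wheels for the host arch (needs Docker)
  uvx cibuildwheel --platform linux
  # build the sdist
  uv build --sdist
  # upload everything to PyPI
  uv publish wheelhouse/*.whl dist/*.tar.gz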

@larsimmisch
Owner

I installed uv/uvx a few days ago and ran sudo uvx cibuildwheel (sudo because of my docker installation), but I got an error (below).

@mttbernardini if you have an idea, let me know (I can also post full logs, but this was on a pretty standard raspian install on your branch)

Error: 
Build failed because a pure Python wheel was generated.

If you intend to build a pure-Python wheel, you don't need
cibuildwheel - use `pip wheel .`, `pipx run build --wheel`, `uv
build --wheel`, etc. instead. You only need cibuildwheel if you
have compiled (not Python) code in your wheels making them depend
on the platform.

If you expected a platform wheel, check your project configuration,
or run cibuildwheel with CIBW_BUILD_VERBOSITY=1 to view build logs.

@mttbernardini
Author

I installed uv/uvx a few days ago and ran sudo uvx cibuildwheel (sudo because of my docker installation), but I got an error (below).

@larsimmisch unfortunately I'm not able to reproduce. I ran the same command as yours (which pulls cibuildwheel==3.2.1, while so far I've been using, and pinned in the makefile, cibuildwheel==3.1.4 - although there doesn't seem to be any practical difference between the two) and generated the following wheels successfully:

16 wheels produced in 2 minutes

  cp38-manylinux_x86_64: pyalsaaudio-0.11.0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 544.8 kB in 6 seconds, SHA256=a9f7f57c866f158b742b67af1a93360b74fe956f696099325c938e5dc5e31716
  cp39-manylinux_x86_64: pyalsaaudio-0.11.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 542.5 kB in 3 seconds, SHA256=445d32d60861ca03ab00da40ba49089e284d2d50258a8a277e8f00d5345ada73
  cp310-manylinux_x86_64: pyalsaaudio-0.11.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 542.8 kB in 2 seconds, SHA256=39fa7e24a432f59bea5b03661a64a7d0546a4f2fcdfe7faf170983509fde3c60
  cp311-manylinux_x86_64: pyalsaaudio-0.11.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 544.1 kB in 2 seconds, SHA256=ce4016bab6918e643ce2cc26a23a3151f34a343e87fe444499a86845b71a3a46
  cp312-manylinux_x86_64: pyalsaaudio-0.11.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 545.7 kB in 2 seconds, SHA256=746db2ac57c28034370c5d161ca26cdf8b320baf6e81bca013a0d16e23fd56fb
  cp313-manylinux_x86_64: pyalsaaudio-0.11.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 545.9 kB in 3 seconds, SHA256=f8a228381abe45e67b8c2388010205e03d3e91af423a7f3c906c54bc9ea22263
  cp314-manylinux_x86_64: pyalsaaudio-0.11.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 545.8 kB in 3 seconds, SHA256=2565b247040a2774b87187019320ece417645bd34e1fec7e1e31557a61948432
  cp314t-manylinux_x86_64: pyalsaaudio-0.11.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl 558.2 kB in 3 seconds, SHA256=f7346bace3b8e1c595d5b0c76c506a7ba8254b5550d4c7d5095bdad59a08acf2
  cp38-musllinux_x86_64: pyalsaaudio-0.11.0-cp38-cp38-musllinux_1_2_x86_64.whl 466.1 kB in 38 seconds, SHA256=86a604e66fe0bd092379542c039577555b1ad93d3247b463c7a854e154f26cd0
  cp39-musllinux_x86_64: pyalsaaudio-0.11.0-cp39-cp39-musllinux_1_2_x86_64.whl 464.5 kB in 8 seconds, SHA256=d1a0c151d21d5b148298272826c2f74b3e9439b2370a0df736108ca2196e4e8c
  cp310-musllinux_x86_64: pyalsaaudio-0.11.0-cp310-cp310-musllinux_1_2_x86_64.whl 464.7 kB in 3 seconds, SHA256=2c046062e7c01813ca937a87fcfae6ecfad7002c8252c0eace21b0a89663a8e6
  cp311-musllinux_x86_64: pyalsaaudio-0.11.0-cp311-cp311-musllinux_1_2_x86_64.whl 465.9 kB in 3 seconds, SHA256=8df0b67094a8dab34f0147143f5be5c9bc6f9d7aeb0d4822ed7d6911de54f816
  cp312-musllinux_x86_64: pyalsaaudio-0.11.0-cp312-cp312-musllinux_1_2_x86_64.whl 467.7 kB in 2 seconds, SHA256=3ace8a88e37a8bef2ce408c0b86132b283b1998f6b465327b5e4c69170b3dac8
  cp313-musllinux_x86_64: pyalsaaudio-0.11.0-cp313-cp313-musllinux_1_2_x86_64.whl 467.8 kB in 3 seconds, SHA256=91e2eb94c5c7057ec27c916be06447978c3a21a7782d465fdb76e5c0afe054e7
  cp314-musllinux_x86_64: pyalsaaudio-0.11.0-cp314-cp314-musllinux_1_2_x86_64.whl 467.6 kB in 4 seconds, SHA256=4420bab8cf75766ed2456b8bf766f06c5f1f641b0fd390e0c1064790a8e5d96f
  cp314t-musllinux_x86_64: pyalsaaudio-0.11.0-cp314-cp314t-musllinux_1_2_x86_64.whl 479.8 kB in 3 seconds, SHA256=29f8802506842d53b0c98f26f4dfff4abe78179b51fd9e950dab1c3029a73863

cibuildwheel recommends building from a clean checkout (git clean -dfx), in case there are leftover artifacts in a build/ directory that could interfere with build isolation. Perhaps you could try that and let me know?

For what it's worth, my environment is Debian Trixie x86_64, though I doubt it matters, since the build itself runs in predefined Docker images.

@larsimmisch
Owner

larsimmisch commented Oct 23, 2025

@mttbernardini Good idea with the git clean -xfd. It still didn't work for me, but I noticed that your wheels are all x86_64, while I'm trying to build on an arm64 machine (RPi 5).

Is arm64 in the list of targets?

I'll try on an Intel machine this weekend.

@mttbernardini
Author

I'm trying to build on an arm64 machine

Ah, I see - I'll try building on arm64 as well.

cibuildwheel doesn't cross-build; you only get wheels for the host arch. For this reason the GitHub workflow runs on both x86_64 and arm64 runners to get both sets of wheels (as recommended by its docs).
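
For context, that part of the workflow is shaped roughly like this (a sketch using current GitHub-hosted runner labels; the actual workflow may differ):

  # one wheel-building job per architecture; native arm64 runners
  # avoid slow QEMU emulation
  jobs:
    wheels:
      strategy:
        matrix:
          os: [ubuntu-latest, ubuntu-24.04-arm]
      runs-on: ${{ matrix.os }}
      steps:
        - uses: actions/checkout@v4
        - uses: pypa/cibuildwheel@v3.1.4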

